2025-06-13 13:53:30,546 [ 48527 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-06-13 13:53:30,547 [ 48527 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:79, check_args_and_update_paths)
2025-06-13 13:53:30,547 [ 48527 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:90, check_args_and_update_paths)
2025-06-13 13:53:30,547 [ 48527 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:92, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_2uhobr --privileged --dns-search='.' --memory=30709026816 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 --report-log=parallel0_0.jsonl --report-log-exclude-logs-on-passed-tests test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 
'test_backup_restore_storage_policy/test.py::test_storage_policies[None--default]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2]' test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column test_block_structure_mismatch/test.py::test test_cancel_freeze/test.py::test_cancel_backup test_check_table/test.py::test_check_all_tables 'test_check_table/test.py::test_check_normal_table_corruption[]' 'test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin]' 'test_check_table/test.py::test_check_replicated_table_simple[-_0]' test_cluster_all_replicas/test.py::test_cluster 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes]' test_cluster_all_replicas/test.py::test_global_in 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes]' test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1 test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50 test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 test_config_xml_yaml_mix/test.py::test_extra_yaml_mix test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition 'test_database_backup/test.py::test_database_backup_database[Disk('\"'\"'backup_disk_local'\"'\"', '\"'\"'test_database_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_database[Disk('\"'\"'backup_disk_object_storage_local_plain'\"'\"', '\"'\"'test_database_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_database[Disk('\"'\"'backup_disk_s3_plain'\"'\"', '\"'\"'test_database_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_database[File('\"'\"'test_database_backup_file'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[Disk('\"'\"'backup_disk_local'\"'\"', '\"'\"'test_table_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[Disk('\"'\"'backup_disk_object_storage_local_plain'\"'\"', '\"'\"'test_table_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[Disk('\"'\"'backup_disk_s3_plain'\"'\"', '\"'\"'test_table_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[File('\"'\"'test_table_backup_file'\"'\"')]' 
'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed]' 'test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True]' test_dictionaries_config_reload/test.py::test test_dictionaries_ddl/test.py::test_clickhouse_remote test_dictionaries_ddl/test.py::test_conflicting_name 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache]' 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed]' 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache]' 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed]' test_dictionaries_ddl/test.py::test_dictionary_with_where test_dictionaries_ddl/test.py::test_file_dictionary_restrictions test_dictionaries_ddl/test.py::test_http_dictionary_restrictions test_dictionaries_ddl/test.py::test_named_collection test_dictionaries_ddl/test.py::test_restricted_database test_dictionaries_ddl/test.py::test_secure test_dictionaries_ddl/test.py::test_with_insert_query test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation test_disk_configuration/test.py::test_merge_tree_custom_disk_setting test_disk_configuration/test.py::test_merge_tree_disk_setting test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting test_disk_configuration/test.py::test_merge_tree_setting_override 'test_distributed_ddl/test.py::test_allowed_databases[configs]' 'test_distributed_ddl/test.py::test_allowed_databases[configs_secure]' 'test_distributed_ddl/test.py::test_create_as_select[configs]' 'test_distributed_ddl/test.py::test_create_as_select[configs_secure]' 'test_distributed_ddl/test.py::test_create_reserved[configs]' 'test_distributed_ddl/test.py::test_create_reserved[configs_secure]' 'test_distributed_ddl/test.py::test_create_view[configs]' 'test_distributed_ddl/test.py::test_create_view[configs_secure]' 'test_distributed_ddl/test.py::test_default_database[configs]' 'test_distributed_ddl/test.py::test_default_database[configs_secure]' 'test_distributed_ddl/test.py::test_detach_query[configs]' 'test_distributed_ddl/test.py::test_detach_query[configs_secure]' 'test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs]' 'test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure]' 'test_distributed_ddl/test.py::test_implicit_macros[configs]' 'test_distributed_ddl/test.py::test_implicit_macros[configs_secure]' 'test_distributed_ddl/test.py::test_kill_query[configs]' 'test_distributed_ddl/test.py::test_kill_query[configs_secure]' 'test_distributed_ddl/test.py::test_macro[configs]' 'test_distributed_ddl/test.py::test_macro[configs_secure]' -vvv " altinityinfra/integration-tests-runner:ad96270260ff '. Start tests ============================= test session starts ============================== platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3 cachedir: .pytest_cache Test order randomisation NOT enabled. 
Enable with --random-order or --random-order-bucket=
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [100 items]
scheduling tests via LoadFileScheduling
test_dictionaries_ddl/test.py::test_clickhouse_remote
test_distributed_ddl/test.py::test_allowed_databases[configs]
test_attach_partition_using_copy/test.py::test_all_replicated
test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_local', 'test_database_backup')]
test_backup_restore_storage_policy/test.py::test_storage_policies[None--default]
test_cluster_all_replicas/test.py::test_cluster
test_check_table/test.py::test_check_all_tables
test_accept_invalid_certificate/test.py::test_accept
test_backup_restore_azure_blob_storage/test.py::test_backup_restore
test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default
[gw5] [ 1%] PASSED test_accept_invalid_certificate/test.py::test_accept
test_accept_invalid_certificate/test.py::test_connection_accept
[gw5] [ 2%] PASSED test_accept_invalid_certificate/test.py::test_connection_accept
test_accept_invalid_certificate/test.py::test_default
[gw5] [ 3%] PASSED test_accept_invalid_certificate/test.py::test_default
test_accept_invalid_certificate/test.py::test_strict_connection_reject
[gw5] [ 4%] PASSED test_accept_invalid_certificate/test.py::test_strict_connection_reject
test_accept_invalid_certificate/test.py::test_strict_reject
[gw5] [ 5%] PASSED test_accept_invalid_certificate/test.py::test_strict_reject
test_accept_invalid_certificate/test.py::test_strict_reject_with_config
[gw5] [ 6%] PASSED test_accept_invalid_certificate/test.py::test_strict_reject_with_config
[gw3] [ 7%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[None--default]
test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default]
[gw7] [ 8%] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default
test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1
[gw3] [ 9%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default]
test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1]
test_disk_configuration/test.py::test_merge_tree_custom_disk_setting
[gw7] [ 10%] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1
test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50
[gw3] [ 11%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1]
test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default]
[gw3] [ 12%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default]
test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1]
[gw7] [ 13%] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50
test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached
[gw6] [ 14%] PASSED test_cluster_all_replicas/test.py::test_cluster
test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes]
[gw3] [ 15%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1]
test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1] [gw1] [ 16%] PASSED test_distributed_ddl/test.py::test_allowed_databases[configs] test_distributed_ddl/test.py::test_create_as_select[configs] [gw3] [ 17%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1] test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2] [gw1] [ 18%] PASSED test_distributed_ddl/test.py::test_create_as_select[configs] test_distributed_ddl/test.py::test_create_reserved[configs] [gw9] [ 19%] PASSED test_check_table/test.py::test_check_all_tables test_check_table/test.py::test_check_normal_table_corruption[] [gw6] [ 20%] PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] [gw1] [ 21%] PASSED test_distributed_ddl/test.py::test_create_reserved[configs] test_distributed_ddl/test.py::test_create_view[configs] [gw3] [ 22%] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2] test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] [gw0] [ 23%] PASSED test_dictionaries_ddl/test.py::test_clickhouse_remote test_dictionaries_ddl/test.py::test_conflicting_name [gw7] [ 24%] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default [gw0] [ 25%] PASSED test_dictionaries_ddl/test.py::test_conflicting_name test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache] [gw6] [ 26%] PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] test_cluster_all_replicas/test.py::test_global_in [gw6] [ 27%] PASSED test_cluster_all_replicas/test.py::test_global_in test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] [gw7] [ 28%] PASSED test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 [gw7] [ 29%] PASSED test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 [gw1] [ 30%] PASSED test_distributed_ddl/test.py::test_create_view[configs] test_distributed_ddl/test.py::test_default_database[configs] [gw9] [ 31%] PASSED test_check_table/test.py::test_check_normal_table_corruption[] test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin] [gw6] [ 32%] PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] [gw1] [ 33%] PASSED test_distributed_ddl/test.py::test_default_database[configs] test_distributed_ddl/test.py::test_detach_query[configs] [gw2] [ 34%] PASSED test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_local', 'test_database_backup')] test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_object_storage_local_plain', 'test_database_backup')] test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact [gw1] [ 35%] PASSED test_distributed_ddl/test.py::test_detach_query[configs] test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs] [gw1] [ 36%] PASSED 
test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs] test_distributed_ddl/test.py::test_implicit_macros[configs] [gw1] [ 37%] PASSED test_distributed_ddl/test.py::test_implicit_macros[configs] test_distributed_ddl/test.py::test_kill_query[configs] [gw1] [ 38%] PASSED test_distributed_ddl/test.py::test_kill_query[configs] test_distributed_ddl/test.py::test_macro[configs] [gw2] [ 39%] PASSED test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_object_storage_local_plain', 'test_database_backup')] test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_s3_plain', 'test_database_backup')] [gw1] [ 40%] PASSED test_distributed_ddl/test.py::test_macro[configs] test_distributed_ddl/test.py::test_allowed_databases[configs_secure] [gw2] [ 41%] PASSED test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_s3_plain', 'test_database_backup')] test_database_backup/test.py::test_database_backup_database[File('test_database_backup_file')] [gw6] [ 42%] PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] [gw7] [ 43%] PASSED test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide [gw5] [ 44%] PASSED test_disk_configuration/test.py::test_merge_tree_custom_disk_setting test_disk_configuration/test.py::test_merge_tree_disk_setting [gw0] [ 45%] PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache] test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed] [gw2] [ 46%] PASSED test_database_backup/test.py::test_database_backup_database[File('test_database_backup_file')] test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_local', 'test_table_backup')] [gw7] [ 47%] PASSED test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column [gw5] [ 48%] PASSED test_disk_configuration/test.py::test_merge_tree_disk_setting test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False] [gw2] [ 49%] PASSED test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_local', 'test_table_backup')] test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_object_storage_local_plain', 'test_table_backup')] [gw3] [ 50%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] [gw5] [ 51%] PASSED test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting test_disk_configuration/test.py::test_merge_tree_setting_override [gw5] [ 52%] PASSED test_disk_configuration/test.py::test_merge_tree_setting_override [gw2] [ 53%] PASSED test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_object_storage_local_plain', 'test_table_backup')] test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_s3_plain', 'test_table_backup')] [gw1] [ 54%] PASSED test_distributed_ddl/test.py::test_allowed_databases[configs_secure] test_distributed_ddl/test.py::test_create_as_select[configs_secure] [gw2] [ 55%] PASSED test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_s3_plain', 
'test_table_backup')] test_database_backup/test.py::test_database_backup_table[File('test_table_backup_file')] [gw1] [ 56%] PASSED test_distributed_ddl/test.py::test_create_as_select[configs_secure] test_distributed_ddl/test.py::test_create_reserved[configs_secure] [gw1] [ 57%] PASSED test_distributed_ddl/test.py::test_create_reserved[configs_secure] test_distributed_ddl/test.py::test_create_view[configs_secure] [gw0] [ 58%] PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed] test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache] [gw3] [ 59%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] [gw2] [ 60%] PASSED test_database_backup/test.py::test_database_backup_table[File('test_table_backup_file')] test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message [gw1] [ 61%] PASSED test_distributed_ddl/test.py::test_create_view[configs_secure] test_distributed_ddl/test.py::test_default_database[configs_secure] [gw1] [ 62%] PASSED test_distributed_ddl/test.py::test_default_database[configs_secure] test_distributed_ddl/test.py::test_detach_query[configs_secure] [gw1] [ 63%] PASSED test_distributed_ddl/test.py::test_detach_query[configs_secure] test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure] [gw1] [ 64%] PASSED test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure] test_distributed_ddl/test.py::test_implicit_macros[configs_secure] [gw1] [ 65%] PASSED test_distributed_ddl/test.py::test_implicit_macros[configs_secure] test_distributed_ddl/test.py::test_kill_query[configs_secure] [gw1] [ 66%] PASSED test_distributed_ddl/test.py::test_kill_query[configs_secure] test_distributed_ddl/test.py::test_macro[configs_secure] [gw5] [ 67%] PASSED test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation test_dictionaries_config_reload/test.py::test [gw0] [ 68%] PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache] test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed] [gw1] [ 69%] PASSED test_distributed_ddl/test.py::test_macro[configs_secure] [gw3] [ 70%] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] [gw7] [ 71%] PASSED test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition test_block_structure_mismatch/test.py::test [gw2] [ 72%] PASSED test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation [gw4] [ 73%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids [gw7] [ 74%] PASSED test_block_structure_mismatch/test.py::test [gw4] [ 75%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container [gw0] [ 76%] PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed] test_dictionaries_ddl/test.py::test_dictionary_with_where [gw4] [ 77%] PASSED 
test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container
test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree
test_cancel_freeze/test.py::test_cancel_backup
[gw0] [ 78%] PASSED test_dictionaries_ddl/test.py::test_dictionary_with_where
test_dictionaries_ddl/test.py::test_file_dictionary_restrictions
[gw5] [ 79%] PASSED test_dictionaries_config_reload/test.py::test
[gw0] [ 80%] PASSED test_dictionaries_ddl/test.py::test_file_dictionary_restrictions
test_dictionaries_ddl/test.py::test_http_dictionary_restrictions
[gw4] [ 81%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree
test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1
[gw0] [ 82%] PASSED test_dictionaries_ddl/test.py::test_http_dictionary_restrictions
test_dictionaries_ddl/test.py::test_named_collection
[gw0] [ 83%] PASSED test_dictionaries_ddl/test.py::test_named_collection
test_dictionaries_ddl/test.py::test_restricted_database
[gw4] [ 84%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1
test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2
[gw4] [ 85%] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2
[gw8] [ 86%] FAILED test_attach_partition_using_copy/test.py::test_all_replicated
test_attach_partition_using_copy/test.py::test_both_mergetree
[gw0] [ 87%] PASSED test_dictionaries_ddl/test.py::test_restricted_database
test_dictionaries_ddl/test.py::test_secure
[gw0] [ 88%] PASSED test_dictionaries_ddl/test.py::test_secure
test_dictionaries_ddl/test.py::test_with_insert_query
[gw0] [ 89%] PASSED test_dictionaries_ddl/test.py::test_with_insert_query
[gw6] [ 90%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False]
test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True]
[gw3] [ 91%] PASSED test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition
[gw7] [ 92%] PASSED test_cancel_freeze/test.py::test_cancel_backup
[gw6] [ 93%] PASSED test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True]
test_config_xml_yaml_mix/test.py::test_extra_yaml_mix
[gw6] [ 94%] PASSED test_config_xml_yaml_mix/test.py::test_extra_yaml_mix
[gw8] [ 95%] FAILED test_attach_partition_using_copy/test.py::test_both_mergetree
test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
[gw9] [ 96%] PASSED test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin]
test_check_table/test.py::test_check_replicated_table_simple[-_0]
[gw9] [ 97%] PASSED test_check_table/test.py::test_check_replicated_table_simple[-_0]
test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field
[gw9] [ 98%] PASSED test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field
[gw8] [ 99%] FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
test_attach_partition_using_copy/test.py::test_only_destination_replicated
[gw8] [100%] FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated
=================================== FAILURES ===================================
_____________________________ test_all_replicated ______________________________
[gw8] linux -- Python 3.10.12 /usr/bin/python3
start_cluster = 

    def test_all_replicated(start_cluster):
        cleanup([replica1, replica2])
>       create_source_table(replica1, "source", True)

test_attach_partition_using_copy/test.py:128: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_attach_partition_using_copy/test.py:40: in create_source_table
    node.query_with_retry(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 
sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n "
stdin = None, timeout = 60, settings = None, user = None, password = None
database = None, host = None, ignore_error = False, retry_count = 3
sleep_time = 0.5
check_callback = at 0x7f587ff60ca0>
parse = False

    def query_with_retry(
        self,
        sql,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        password=None,
        database=None,
        host=None,
        ignore_error=False,
        retry_count=20,
        sleep_time=0.5,
        check_callback=lambda x: True,
        parse=False,
    ):
        # logging.debug(f"Executing query {sql} on {self.name}")
        result = None
        exception_msg = ""
        for i in range(retry_count):
            try:
                result = self.query(
                    sql,
                    stdin=stdin,
                    timeout=timeout,
                    settings=settings,
                    user=user,
                    password=password,
                    database=database,
                    host=host,
                    ignore_error=ignore_error,
                    parse=parse,
                )
                if check_callback(result):
                    return result
                time.sleep(sleep_time)
            except QueryRuntimeException as ex:
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                # Container is down, this is likely due to server crash.
                if "No route to host" in str(ex):
                    raise
                time.sleep(sleep_time)
            except Exception as ex:
                # logging.debug("Retry {} got exception {}".format(i + 1, ex))
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                time.sleep(sleep_time)
        if result is not None:
            return result
>       raise Exception(f"Can't execute query {sql}\n{exception_msg}")
E       Exception: Can't execute query 
E       ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E       price UInt32,
E       date Date,
E       postcode1 LowCardinality(String),
E       postcode2 LowCardinality(String),
E       type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E       is_new UInt8,
E       duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E       addr1 String,
E       addr2 String,
E       street LowCardinality(String),
E       locality LowCardinality(String),
E       town LowCardinality(String),
E       district LowCardinality(String),
E       county LowCardinality(String)
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1')
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E       
E       QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.3):
E       Code: 198. DB::Exception: Received from 172.16.6.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace:
E       
E       0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:106: Poco::Exception::Exception(String const&, int) @ 0x00000000381ceaf1
E       1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc263d1
E       2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000001bbafa6f
E       3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x000000001bba8c1e
E       4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000001bba5ba8
E       5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000001bba65e0
E       6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x000000001bba7327
E       7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000001c3c23a0
E       8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c3beb81
E       9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x000000001c3be68d
E       10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000001c3be054
E       11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000001c3c7810
E       12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000001c3c738d
E       13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x000000001c3c218b
E       14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a9e08
E       15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a8190
E       16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000001c3d2134
E       17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000021098ef2
E       18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000002109957c
E       19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x000000002109a5fb
E       20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000002109ff18
E       21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000021094f71
E       22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000002109cc23
E       23. DB::ReadBuffer::next() @ 0x000000000c50120b
E       24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000028536002
E       25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a95f
E       26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a290
E       27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x00000000285323c6
E       28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x000000002843e75e
E       29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:382: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000002eddfcd0
E       30. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:414: DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(DB::TableZnodeInfo const&, DB::LoadingStrictnessLevel, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>, bool, DB::ZooKeeperRetriesInfo const&) @ 0x000000002e38201c
E       31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::TableZnodeInfo&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, bool&, DB::ZooKeeperRetriesInfo&, 0>(std::allocator const&, DB::TableZnodeInfo&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&, bool&, DB::ZooKeeperRetriesInfo&) @ 0x000000002f592581
E       . (DNS_ERROR)
E       (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E       price UInt32,
E       date Date,
E       postcode1 LowCardinality(String),
E       postcode2 LowCardinality(String),
E       type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E       is_new UInt8,
E       duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E       addr1 String,
E       addr2 String,
E       street LowCardinality(String),
E       locality LowCardinality(String),
E       town LowCardinality(String),
E       district LowCardinality(String),
E       county LowCardinality(String)
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1')
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E       )

helpers/cluster.py:3634: Exception
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config.
Files: config.xml, users.xml ------------------------------ Captured log setup ------------------------------ 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:121, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : No running containers (conftest.py:95, cleanup_environment) 2025-06-13 13:53:37 [ 688 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment) 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[docker network prune --force] (cluster.py:121, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:121, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:145, run_and_check) 2025-06-13 13:53:37 [ 688 ] INFO : Running tests in /ClickHouse/tests/integration/test_attach_partition_using_copy/test.py (cluster.py:2672, start) 2025-06-13 13:53:37 [ 688 ] DEBUG : Cluster start called. is_up=False (cluster.py:2679, start) 2025-06-13 13:53:37 [ 688 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw8 are NETWORK ID NAME DRIVER SCOPE (cluster.py:825, print_all_docker_pieces) 2025-06-13 13:53:37 [ 688 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:833, print_all_docker_pieces) 2025-06-13 13:53:37 [ 688 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw8 are DRIVER VOLUME NAME (cluster.py:841, print_all_docker_pieces) 2025-06-13 13:53:37 [ 688 ] DEBUG : Cleanup called (cluster.py:846, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw8 are NETWORK ID NAME DRIVER SCOPE (cluster.py:825, print_all_docker_pieces) 2025-06-13 13:53:37 [ 688 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:833, print_all_docker_pieces) 2025-06-13 13:53:37 [ 688 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw8 are DRIVER VOLUME NAME (cluster.py:841, print_all_docker_pieces) 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Unstopped containers: {} (cluster.py:860, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw8 (cluster.py:874, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : Trying to prune unused networks... (cluster.py:880, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : Trying to prune unused images... (cluster.py:896, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Stderr:Error response from daemon: a prune operation is already running (cluster.py:147, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Exitcode:1 (cluster.py:149, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:905, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-06-13 13:53:37 [ 688 ] DEBUG : Volumes pruned: 1 (cluster.py:910, cleanup) 2025-06-13 13:53:37 [ 688 ] DEBUG : Setup directory for instance: replica1 (cluster.py:2692, start) 2025-06-13 13:53:37 [ 688 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4536, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Create directory for common tests configuration (cluster.py:4541, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Copy common configuration from helpers (cluster.py:4561, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Generate and write macros file (cluster.py:4613, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/configs/config.d (cluster.py:4649, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/database (cluster.py:4666, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/logs (cluster.py:4677, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4758, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Setup directory for instance: replica2 (cluster.py:2692, start) 2025-06-13 13:53:37 [ 688 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4536, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Create directory for common tests configuration (cluster.py:4541, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Copy common configuration from helpers (cluster.py:4561, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Generate and write macros file (cluster.py:4613, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/configs/config.d (cluster.py:4649, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/database (cluster.py:4666, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/logs (cluster.py:4677, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4758, create_dir) 2025-06-13 13:53:37 [ 688 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': 
'/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:5ccda723c1fc', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env (cluster.py:96, _create_env_file) 2025-06-13 13:53:37 [ 688 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-06-13 13:53:37 [ 688 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-06-13 13:53:37 [ 688 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file) 2025-06-13 13:53:37 [ 688 ] DEBUG : No config file found (config.py:28, find_config_file) 2025-06-13 13:53:37 [ 688 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request) 2025-06-13 13:53:37 [ 688 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env --project-name roottestattachpartitionusingcopy-gw8 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/docker-compose.yml pull] (cluster.py:121, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Stderr: replica1 Skipped - Image is already being pulled by replica2 (cluster.py:147, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Stderr: zoo1 Skipped - Image is already being pulled by replica2 (cluster.py:147, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Stderr: zoo2 Skipped - Image is already being pulled by replica2 (cluster.py:147, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Stderr: zoo3 Skipped - Image is already being pulled by replica2 (cluster.py:147, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Stderr: replica2 Pulling (cluster.py:147, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Stderr: replica2 Pulled (cluster.py:147, run_and_check) 2025-06-13 13:53:48 [ 688 ] DEBUG : Setup ZooKeeper (cluster.py:2733, start) 2025-06-13 13:53:48 [ 688 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper1/log', 
'/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper1/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper1/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper2/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper2/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper2/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper3/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper3/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/keeper3/coordination'] (cluster.py:2734, start) 2025-06-13 13:53:48 [ 688 ] DEBUG : Command:[docker compose --project-name roottestattachpartitionusingcopy-gw8 --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] (cluster.py:121, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr:time="2025-06-13T13:53:48Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw8_default Creating (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw8_default Created (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Creating (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Creating (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Creating (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Created (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Created (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Created (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Starting (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Starting (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Starting (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Started (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Started (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Started (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr:time="2025-06-13T13:53:51Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Stderr:time="2025-06-13T13:53:51Z" level=debug msg="otel error" error="" (cluster.py:147, 
run_and_check) 2025-06-13 13:53:51 [ 688 ] DEBUG : Wait ZooKeeper to start (cluster.py:2398, wait_zookeeper_to_start) 2025-06-13 13:53:51 [ 688 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:1999, get_instance_ip) 2025-06-13 13:53:51 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-zoo1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:51 [ 688 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.6.4, port:2181, use_ssl:False (cluster.py:3234, get_kazoo_client) 2025-06-13 13:53:51 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:51 [ 688 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-06-13 13:53:51 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:51 [ 688 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-06-13 13:53:51 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:51 [ 688 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-06-13 13:53:51 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:51 [ 688 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-06-13 13:53:52 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:52 [ 688 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-06-13 13:53:53 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:53 [ 688 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] INFO : Connecting to 172.16.6.4(172.16.6.4):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-06-13 13:53:55 [ 688 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-06-13 13:53:55 [ 688 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:1999, get_instance_ip) 2025-06-13 13:53:55 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-zoo2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:55 [ 688 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.6.2, port:2181, use_ssl:False (cluster.py:3234, get_kazoo_client) 2025-06-13 13:53:55 [ 688 ] INFO : Connecting to 172.16.6.2(172.16.6.2):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-06-13 13:53:55 [ 688 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-06-13 13:53:55 [ 688 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:1999, get_instance_ip) 2025-06-13 13:53:55 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-zoo3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:55 [ 688 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.6.3, port:2181, use_ssl:False (cluster.py:3234, get_kazoo_client) 2025-06-13 13:53:55 [ 688 ] INFO : Connecting to 172.16.6.3(172.16.6.3):2181, use_ssl: False (connection.py:650, _connect) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response) 2025-06-13 13:53:55 [ 688 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit) 2025-06-13 13:53:55 [ 688 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper connection lost (client.py:543, _session_callback) 2025-06-13 13:53:55 [ 688 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. 
(connection.py:515, zk_loop) 2025-06-13 13:53:55 [ 688 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-06-13 13:53:55 [ 688 ] DEBUG : All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') (cluster.py:2414, wait_zookeeper_nodes_to_start) 2025-06-13 13:53:55 [ 688 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env --project-name roottestattachpartitionusingcopy-gw8 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/docker-compose.yml up -d --no-recreate') (cluster.py:3061, start) 2025-06-13 13:53:55 [ 688 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env --project-name roottestattachpartitionusingcopy-gw8 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Running (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Running (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Running (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Creating (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Creating (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Created (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Created (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Starting (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Starting (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Started (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Started (cluster.py:147, run_and_check) 2025-06-13 13:53:56 [ 688 ] DEBUG : ClickHouse instance created (cluster.py:3069, start) 2025-06-13 13:53:56 [ 688 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:1999, get_instance_ip) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : get_instance_ip instance_name=replica1 
(cluster.py:2009, get_instance_global_ipv6) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : Waiting for ClickHouse start in replica1, ip: 172.16.6.6... (cluster.py:3077, start) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:56 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET 
/v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:57 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:58 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/4b8330de42bad4d6dbd6d1be7a0242f0962cad8edf9cb60757539d7aafca7f5f/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : ClickHouse replica1 started (cluster.py:3081, start) 2025-06-13 13:53:59 [ 688 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:1999, get_instance_ip) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : get_instance_ip instance_name=replica2 
(cluster.py:2009, get_instance_global_ipv6) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : Waiting for ClickHouse start in replica2, ip: 172.16.6.5... (cluster.py:3077, start) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw8-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : http://localhost:None "GET /v1.46/containers/bcf3020bed60262149306ac169894e77d70d6274983b6ef816fb656f37bd3f07/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-06-13 13:53:59 [ 688 ] DEBUG : ClickHouse replica2 started (cluster.py:3081, start) ------------------------------ Captured log call ------------------------------- 2025-06-13 13:53:59 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3570, query) 2025-06-13 13:53:59 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3570, query) 2025-06-13 13:54:00 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3570, query) 2025-06-13 13:54:00 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3570, query) 2025-06-13 13:54:00 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 13:54:55 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 13:55:51 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality 
LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query)
_____________________________ test_both_mergetree ______________________________
[gw8] linux -- Python 3.10.12 /usr/bin/python3
start_cluster = 

    def test_both_mergetree(start_cluster):
        cleanup([replica1, replica2])
>       create_source_table(replica1, "source", False)

test_attach_partition_using_copy/test.py:106: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
test_attach_partition_using_copy/test.py:40: in create_source_table
    node.query_with_retry(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = 
sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n "
stdin = None, timeout = 60, settings = None, user = None, password = None
database = None, host = None, ignore_error = False, retry_count = 3
sleep_time = 0.5
check_callback = at 0x7f587ff60ca0>
parse = False

    def query_with_retry(
        self,
        sql,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        password=None,
        database=None,
        host=None,
        ignore_error=False,
        retry_count=20,
        sleep_time=0.5,
        check_callback=lambda x: True,
        parse=False,
    ):
        # logging.debug(f"Executing query {sql} on {self.name}")
        result = None
        exception_msg = ""
        for i in range(retry_count):
            try:
                result = self.query(
                    sql,
                    stdin=stdin,
                    timeout=timeout,
                    settings=settings,
                    user=user,
                    password=password,
                    database=database,
                    host=host,
                    ignore_error=ignore_error,
                    parse=parse,
                )
                if check_callback(result):
                    return result
                time.sleep(sleep_time)
            except QueryRuntimeException as ex:
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                # Container is down, this is likely due to server crash.
                if "No route to host" in str(ex):
                    raise
                time.sleep(sleep_time)
            except Exception as ex:
                # logging.debug("Retry {} got exception {}".format(i + 1, ex))
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                time.sleep(sleep_time)
        if result is not None:
            return result
>       raise Exception(f"Can't execute query {sql}\n{exception_msg}")
E Exception: Can't execute query
E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E (
E price UInt32,
E date Date,
E postcode1 LowCardinality(String),
E postcode2 LowCardinality(String),
E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E is_new UInt8,
E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E addr1 String,
E addr2 String,
E street LowCardinality(String),
E locality LowCardinality(String),
E town LowCardinality(String),
E district LowCardinality(String),
E county LowCardinality(String)
E )
E ENGINE = MergeTree()
E ORDER BY (postcode1, postcode2, addr1, addr2)
E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E
E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.3):
E Code: 198. DB::Exception: Received from 172.16.6.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace:
E
E 0.
./contrib/llvm-project/libcxx/include/__exception/exception.h:106: Poco::Exception::Exception(String const&, int) @ 0x00000000381ceaf1 E 1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc263d1 E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000001bbafa6f E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x000000001bba8c1e E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000001bba5ba8 E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000001bba65e0 E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x000000001bba7327 E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000001c3c23a0 E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c3beb81 E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x000000001c3be68d E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000001c3be054 E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000001c3c7810 E 12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000001c3c738d E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x000000001c3c218b E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a9e08 E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a8190 E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000001c3d2134 E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000021098ef2 E 18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000002109957c E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x000000002109a5fb E 20. 
./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000002109ff18 E 21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000021094f71 E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000002109cc23 E 23. DB::ReadBuffer::next() @ 0x000000000c50120b E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000028536002 E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a95f E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a290 E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x00000000285323c6 E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x000000002843e75e E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:382: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000002eddfcd0 E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000002f593396 E 31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000002f5929f6 E . 
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3634: Exception ------------------------------ Captured log call ------------------------------- 2025-06-13 13:56:46 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3570, query) 2025-06-13 13:56:47 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3570, query) 2025-06-13 13:56:47 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3570, query) 2025-06-13 13:56:47 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3570, query) 2025-06-13 13:56:47 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 13:57:44 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 13:58:41 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY 
(postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) _______________________ test_not_work_on_different_disk ________________________ [gw8] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_not_work_on_different_disk(start_cluster): cleanup([replica1, replica2]) # Replace and move should not work on replace > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f587ff60ca0> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.3): E Code: 198. DB::Exception: Received from 172.16.6.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:106: Poco::Exception::Exception(String const&, int) @ 0x00000000381ceaf1 E 1. 
./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc263d1 E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000001bbafa6f E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x000000001bba8c1e E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000001bba5ba8 E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000001bba65e0 E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x000000001bba7327 E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000001c3c23a0 E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c3beb81 E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x000000001c3be68d E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000001c3be054 E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000001c3c7810 E 12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000001c3c738d E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x000000001c3c218b E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a9e08 E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a8190 E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000001c3d2134 E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000021098ef2 E 18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000002109957c E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x000000002109a5fb E 20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000002109ff18 E 21. 
./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000021094f71 E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000002109cc23 E 23. DB::ReadBuffer::next() @ 0x000000000c50120b E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000028536002 E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a95f E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a290 E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x00000000285323c6 E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x000000002843e75e E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:382: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000002eddfcd0 E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000002f593396 E 31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000002f5929f6 E . 
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3634: Exception ------------------------------ Captured log call ------------------------------- 2025-06-13 13:59:36 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3570, query) 2025-06-13 13:59:36 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3570, query) 2025-06-13 13:59:37 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3570, query) 2025-06-13 13:59:37 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3570, query) 2025-06-13 13:59:37 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 14:00:32 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 14:01:29 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY 
(postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) _______________________ test_only_destination_replicated _______________________ [gw8] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_only_destination_replicated(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:163: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f587ff60ca0> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.3): E Code: 198. DB::Exception: Received from 172.16.6.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:106: Poco::Exception::Exception(String const&, int) @ 0x00000000381ceaf1 E 1. 
./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000001bc263d1 E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x000000001bbafa6f E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x000000001bba8c1e E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x000000001bba5ba8 E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x000000001bba65e0 E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x000000001bba7327 E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x000000001c3c23a0 E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x000000001c3beb81 E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x000000001c3be68d E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x000000001c3be054 E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x000000001c3c7810 E 12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x000000001c3c738d E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x000000001c3c218b E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a9e08 E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x000000001c3a8190 E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x000000001c3d2134 E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000021098ef2 E 18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x000000002109957c E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x000000002109a5fb E 20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x000000002109ff18 E 21. 
./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000021094f71 E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x000000002109cc23 E 23. DB::ReadBuffer::next() @ 0x000000000c50120b E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x0000000028536002 E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a95f E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000002853a290 E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x00000000285323c6 E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x000000002843e75e E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:382: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000002eddfcd0 E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000002f593396 E 31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000002f5929f6 E . 
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3634: Exception ------------------------------ Captured log call ------------------------------- 2025-06-13 14:02:27 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3570, query) 2025-06-13 14:02:27 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3570, query) 2025-06-13 14:02:28 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3570, query) 2025-06-13 14:02:28 [ 688 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3570, query) 2025-06-13 14:02:28 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 14:03:22 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query) 2025-06-13 14:04:17 [ 688 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY 
(postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3570, query)
---------------------------- Captured log teardown -----------------------------
2025-06-13 14:05:14 [ 688 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env --project-name roottestattachpartitionusingcopy-gw8 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/.env --project-name roottestattachpartitionusingcopy-gw8 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-0-gw8/replica2/docker-compose.yml down --volumes] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Removing (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Removing (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica2-1 Removed (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-replica1-1 Removed (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Stopping (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Removing (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Removing (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Stopped (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Removing (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo3-1 Removed (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo2-1 Removed (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw8-zoo1-1 Removed (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw8_default Removing (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw8_default Removed (cluster.py:147, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Cleanup called (cluster.py:846, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw8 are NETWORK ID NAME DRIVER SCOPE (cluster.py:825, print_all_docker_pieces)
2025-06-13 14:05:22 [ 688 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw8 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:833, print_all_docker_pieces)
2025-06-13 14:05:22 [ 688 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw8 are DRIVER VOLUME NAME (cluster.py:841, print_all_docker_pieces)
2025-06-13 14:05:22 [ 688 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw8-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Unstopped containers: {} (cluster.py:860, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw8 (cluster.py:874, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : Trying to prune unused networks... (cluster.py:880, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : Trying to prune unused images... (cluster.py:896, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Images pruned (cluster.py:899, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : Trying to prune unused volumes... (cluster.py:905, cleanup)
2025-06-13 14:05:22 [ 688 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-06-13 14:05:22 [ 688 ] DEBUG : Volumes pruned: 1 (cluster.py:910, cleanup)
----------------- generated report log file: parallel0_0.jsonl -----------------
============================== slowest durations ===============================
315.46s call test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin]
173.50s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore
170.65s call test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
169.76s call test_attach_partition_using_copy/test.py::test_both_mergetree
167.32s call test_attach_partition_using_copy/test.py::test_only_destination_replicated
166.93s call test_attach_partition_using_copy/test.py::test_all_replicated
85.00s call test_cancel_freeze/test.py::test_cancel_backup
79.35s setup test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False]
75.85s setup test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column
46.62s setup test_distributed_ddl/test.py::test_allowed_databases[configs_secure]
46.44s call test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False]
45.87s call test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed]
44.89s call test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True]
41.70s call test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed]
38.81s setup test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_local', 'test_database_backup')]
37.03s call test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed]
36.45s call test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache]
36.06s setup test_dictionaries_ddl/test.py::test_clickhouse_remote
34.79s call test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed]
33.89s call
test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache] 33.05s call test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition 30.53s setup test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True] 30.10s call test_disk_configuration/test.py::test_merge_tree_custom_disk_setting 26.55s call test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] 26.51s setup test_distributed_ddl/test.py::test_allowed_databases[configs] 26.09s setup test_cluster_all_replicas/test.py::test_cluster 25.81s teardown test_distributed_ddl/test.py::test_macro[configs_secure] 23.64s setup test_disk_configuration/test.py::test_merge_tree_custom_disk_setting 22.93s setup test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation 22.32s call test_dictionaries_config_reload/test.py::test 22.05s teardown test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation 21.99s teardown test_disk_configuration/test.py::test_merge_tree_setting_override 21.86s teardown test_database_backup/test.py::test_database_backup_table[File('test_table_backup_file')] 21.82s setup test_attach_partition_using_copy/test.py::test_all_replicated 21.39s setup test_check_table/test.py::test_check_all_tables 19.96s setup test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition 19.67s setup test_dictionaries_config_reload/test.py::test 19.03s call test_config_xml_yaml_mix/test.py::test_extra_yaml_mix 18.79s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 17.98s call test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] 17.18s setup test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 15.60s call test_distributed_ddl/test.py::test_create_view[configs_secure] 15.37s setup test_backup_restore_storage_policy/test.py::test_storage_policies[None--default] 14.86s teardown test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] 14.81s setup test_accept_invalid_certificate/test.py::test_accept 14.65s setup test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default 14.16s setup test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] 13.89s call test_check_table/test.py::test_check_normal_table_corruption[] 13.58s setup test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message 13.39s setup test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact 13.01s setup test_cancel_freeze/test.py::test_cancel_backup 12.78s call test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached 12.52s call test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_s3_plain', 'test_database_backup')] 12.36s call test_check_table/test.py::test_check_all_tables 12.00s teardown test_cancel_freeze/test.py::test_cancel_backup 11.57s call test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_local', 'test_database_backup')] 10.78s call test_distributed_ddl/test.py::test_macro[configs_secure] 10.65s setup test_block_structure_mismatch/test.py::test 10.62s call test_database_backup/test.py::test_database_backup_database[File('test_database_backup_file')] 10.53s call test_distributed_ddl/test.py::test_create_view[configs] 
10.53s call test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact 9.91s call test_disk_configuration/test.py::test_merge_tree_disk_setting 9.84s call test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_s3_plain', 'test_table_backup')] 9.58s call test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_object_storage_local_plain', 'test_table_backup')] 9.29s call test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting 8.93s call test_database_backup/test.py::test_database_backup_table[File('test_table_backup_file')] 8.25s teardown test_dictionaries_ddl/test.py::test_with_insert_query 8.14s call test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_object_storage_local_plain', 'test_database_backup')] 8.04s call test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_local', 'test_table_backup')] 7.95s teardown test_attach_partition_using_copy/test.py::test_only_destination_replicated 7.47s teardown test_check_table/test.py::test_check_replicated_table_simple[-_0] 7.47s call test_check_table/test.py::test_check_replicated_table_simple[-_0] 7.29s call test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide 6.48s call test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] 6.33s call test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] 6.32s call test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] 6.03s teardown test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 5.58s call test_dictionaries_ddl/test.py::test_secure 5.54s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids 5.48s teardown test_accept_invalid_certificate/test.py::test_strict_reject_with_config 5.32s teardown test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition 5.05s teardown test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column 5.03s call test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 4.89s teardown test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] 4.71s call test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation 4.54s call test_distributed_ddl/test.py::test_detach_query[configs] 4.53s call test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1 4.52s teardown test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True] 4.49s teardown test_block_structure_mismatch/test.py::test 4.39s call test_disk_configuration/test.py::test_merge_tree_setting_override 4.27s call test_distributed_ddl/test.py::test_macro[configs] 4.23s teardown test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message 3.97s call test_dictionaries_ddl/test.py::test_restricted_database 3.75s call test_distributed_ddl/test.py::test_default_database[configs_secure] 3.69s call test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default 3.68s call test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50 3.64s call test_distributed_ddl/test.py::test_allowed_databases[configs] 3.52s teardown test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2] 3.23s teardown 
test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field 3.19s call test_distributed_ddl/test.py::test_default_database[configs] 2.93s call test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1] 2.87s call test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 2.86s call test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1] 2.77s call test_dictionaries_ddl/test.py::test_clickhouse_remote 2.74s call test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default] 2.60s call test_distributed_ddl/test.py::test_create_as_select[configs] 2.57s call test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default 2.56s call test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message 2.53s call test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1] 2.50s call test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2] 2.42s call test_distributed_ddl/test.py::test_allowed_databases[configs_secure] 2.38s call test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column 2.22s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree 2.17s call test_distributed_ddl/test.py::test_detach_query[configs_secure] 2.08s call test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default] 1.98s call test_backup_restore_storage_policy/test.py::test_storage_policies[None--default] 1.91s call test_distributed_ddl/test.py::test_create_reserved[configs] 1.82s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore 1.74s call test_distributed_ddl/test.py::test_implicit_macros[configs_secure] 1.68s call test_cluster_all_replicas/test.py::test_cluster 1.64s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container 1.62s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 1.50s call test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 1.45s call test_distributed_ddl/test.py::test_implicit_macros[configs] 1.42s call test_distributed_ddl/test.py::test_create_reserved[configs_secure] 1.40s teardown test_dictionaries_config_reload/test.py::test 1.31s call test_dictionaries_ddl/test.py::test_dictionary_with_where 1.29s teardown test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide 1.00s call test_distributed_ddl/test.py::test_create_as_select[configs_secure] 0.90s call test_dictionaries_ddl/test.py::test_conflicting_name 0.90s call test_block_structure_mismatch/test.py::test 0.90s call test_cluster_all_replicas/test.py::test_global_in 0.90s call test_dictionaries_ddl/test.py::test_http_dictionary_restrictions 0.86s call test_dictionaries_ddl/test.py::test_file_dictionary_restrictions 0.85s call test_dictionaries_ddl/test.py::test_named_collection 0.73s call test_dictionaries_ddl/test.py::test_with_insert_query 0.43s teardown test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1] 0.37s teardown test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default] 0.37s call test_distributed_ddl/test.py::test_kill_query[configs] 0.32s teardown test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1] 0.32s teardown 
test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default] 0.32s call test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs] 0.32s call test_accept_invalid_certificate/test.py::test_connection_accept 0.32s call test_accept_invalid_certificate/test.py::test_accept 0.32s call test_distributed_ddl/test.py::test_kill_query[configs_secure] 0.27s call test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure] 0.27s call test_accept_invalid_certificate/test.py::test_default 0.22s call test_accept_invalid_certificate/test.py::test_strict_reject 0.22s teardown test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1] 0.22s call test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.22s teardown test_backup_restore_storage_policy/test.py::test_storage_policies[None--default] 0.22s call test_accept_invalid_certificate/test.py::test_strict_reject_with_config 0.00s setup test_accept_invalid_certificate/test.py::test_strict_reject 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore 0.00s teardown test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_local', 'test_database_backup')] 0.00s teardown test_distributed_ddl/test.py::test_allowed_databases[configs] 0.00s setup test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache] 0.00s setup test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] 0.00s teardown test_distributed_ddl/test.py::test_allowed_databases[configs_secure] 0.00s setup test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin] 0.00s teardown test_dictionaries_ddl/test.py::test_clickhouse_remote 0.00s setup test_distributed_ddl/test.py::test_detach_query[configs] 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids 0.00s setup test_distributed_ddl/test.py::test_create_as_select[configs] 0.00s setup test_config_xml_yaml_mix/test.py::test_extra_yaml_mix 0.00s setup test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1] 0.00s teardown test_check_table/test.py::test_check_all_tables 0.00s setup test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache] 0.00s setup test_check_table/test.py::test_check_normal_table_corruption[] 0.00s setup test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1] 0.00s teardown test_cluster_all_replicas/test.py::test_cluster 0.00s teardown test_config_xml_yaml_mix/test.py::test_extra_yaml_mix 0.00s setup test_distributed_ddl/test.py::test_create_as_select[configs_secure] 0.00s setup test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False] 0.00s teardown test_disk_configuration/test.py::test_merge_tree_custom_disk_setting 0.00s setup test_dictionaries_ddl/test.py::test_conflicting_name 0.00s teardown test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1 0.00s setup test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_object_storage_local_plain', 'test_database_backup')] 0.00s teardown test_attach_partition_using_copy/test.py::test_all_replicated 0.00s setup test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] 0.00s setup 
test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_object_storage_local_plain', 'test_table_backup')] 0.00s setup test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed] 0.00s setup test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1] 0.00s setup test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2] 0.00s setup test_distributed_ddl/test.py::test_macro[configs] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] 0.00s setup test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default] 0.00s teardown test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache] 0.00s setup test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] 0.00s setup test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] 0.00s teardown test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed] 0.00s setup test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed] 0.00s setup test_distributed_ddl/test.py::test_default_database[configs_secure] 0.00s teardown test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] 0.00s setup test_distributed_ddl/test.py::test_create_view[configs_secure] 0.00s setup test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting 0.00s teardown test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache] 0.00s setup test_disk_configuration/test.py::test_merge_tree_setting_override 0.00s setup test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] 0.00s setup test_dictionaries_ddl/test.py::test_file_dictionary_restrictions 0.00s setup test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_s3_plain', 'test_database_backup')] 0.00s setup test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] 0.00s setup test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_local', 'test_table_backup')] 0.00s teardown test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default 0.00s setup test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree 0.00s setup test_distributed_ddl/test.py::test_create_reserved[configs] 0.00s setup test_distributed_ddl/test.py::test_detach_query[configs_secure] 0.00s setup test_database_backup/test.py::test_database_backup_database[File('test_database_backup_file')] 0.00s setup test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure] 0.00s teardown test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_local', 'test_table_backup')] 0.00s teardown test_distributed_ddl/test.py::test_default_database[configs] 0.00s teardown test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_s3_plain', 'test_database_backup')] 0.00s setup test_distributed_ddl/test.py::test_create_view[configs] 0.00s setup test_dictionaries_ddl/test.py::test_secure 0.00s setup test_disk_configuration/test.py::test_merge_tree_disk_setting 0.00s setup test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_s3_plain', 'test_table_backup')] 0.00s setup 
test_accept_invalid_certificate/test.py::test_strict_reject_with_config 0.00s setup test_database_backup/test.py::test_database_backup_table[File('test_table_backup_file')] 0.00s teardown test_distributed_ddl/test.py::test_default_database[configs_secure] 0.00s setup test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs] 0.00s teardown test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact 0.00s setup test_distributed_ddl/test.py::test_implicit_macros[configs_secure] 0.00s teardown test_check_table/test.py::test_check_normal_table_corruption[] 0.00s teardown test_dictionaries_ddl/test.py::test_named_collection 0.00s setup test_cluster_all_replicas/test.py::test_global_in 0.00s setup test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 0.00s setup test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached 0.00s setup test_dictionaries_ddl/test.py::test_http_dictionary_restrictions 0.00s teardown test_dictionaries_ddl/test.py::test_conflicting_name 0.00s setup test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1 0.00s setup test_accept_invalid_certificate/test.py::test_strict_connection_reject 0.00s setup test_dictionaries_ddl/test.py::test_with_insert_query 0.00s setup test_distributed_ddl/test.py::test_kill_query[configs_secure] 0.00s teardown test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container 0.00s setup test_distributed_ddl/test.py::test_default_database[configs] 0.00s teardown test_distributed_ddl/test.py::test_detach_query[configs_secure] 0.00s teardown test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] 0.00s teardown test_accept_invalid_certificate/test.py::test_accept 0.00s setup test_check_table/test.py::test_check_replicated_table_simple[-_0] 0.00s setup test_distributed_ddl/test.py::test_implicit_macros[configs] 0.00s setup test_attach_partition_using_copy/test.py::test_both_mergetree 0.00s teardown test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_object_storage_local_plain', 'test_table_backup')] 0.00s teardown test_database_backup/test.py::test_database_backup_database[File('test_database_backup_file')] 0.00s teardown test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_s3_plain', 'test_table_backup')] 0.00s setup test_distributed_ddl/test.py::test_macro[configs_secure] 0.00s setup test_accept_invalid_certificate/test.py::test_connection_accept 0.00s teardown test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting 0.00s teardown test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 0.00s setup test_accept_invalid_certificate/test.py::test_default 0.00s teardown test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin] 0.00s teardown test_distributed_ddl/test.py::test_kill_query[configs] 0.00s setup test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 0.00s teardown test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default 0.00s teardown test_attach_partition_using_copy/test.py::test_both_mergetree 0.00s setup test_distributed_ddl/test.py::test_kill_query[configs] 0.00s teardown 
test_dictionaries_ddl/test.py::test_restricted_database 0.00s setup test_dictionaries_ddl/test.py::test_restricted_database 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids 0.00s teardown test_distributed_ddl/test.py::test_detach_query[configs] 0.00s teardown test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure] 0.00s teardown test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed] 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree 0.00s teardown test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_object_storage_local_plain', 'test_database_backup')] 0.00s teardown test_distributed_ddl/test.py::test_create_reserved[configs_secure] 0.00s teardown test_dictionaries_ddl/test.py::test_dictionary_with_where 0.00s teardown test_disk_configuration/test.py::test_merge_tree_disk_setting 0.00s teardown test_distributed_ddl/test.py::test_create_view[configs_secure] 0.00s teardown test_cluster_all_replicas/test.py::test_global_in 0.00s setup test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default 0.00s teardown test_accept_invalid_certificate/test.py::test_default 0.00s teardown test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached 0.00s teardown test_distributed_ddl/test.py::test_create_view[configs] 0.00s teardown test_accept_invalid_certificate/test.py::test_strict_reject 0.00s setup test_dictionaries_ddl/test.py::test_dictionary_with_where 0.00s teardown test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] 0.00s setup test_dictionaries_ddl/test.py::test_named_collection 0.00s setup test_attach_partition_using_copy/test.py::test_only_destination_replicated 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container 0.00s setup test_distributed_ddl/test.py::test_create_reserved[configs_secure] 0.00s teardown test_distributed_ddl/test.py::test_create_as_select[configs] 0.00s setup test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 0.00s teardown test_distributed_ddl/test.py::test_kill_query[configs_secure] 0.00s setup test_attach_partition_using_copy/test.py::test_not_work_on_different_disk 0.00s teardown test_dictionaries_ddl/test.py::test_secure 0.00s teardown test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 0.00s teardown test_distributed_ddl/test.py::test_macro[configs] 0.00s teardown test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs] 0.00s teardown test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50 0.00s teardown test_accept_invalid_certificate/test.py::test_connection_accept 0.00s teardown test_distributed_ddl/test.py::test_implicit_macros[configs_secure] 0.00s teardown test_dictionaries_ddl/test.py::test_http_dictionary_restrictions 0.00s teardown test_distributed_ddl/test.py::test_implicit_macros[configs] 0.00s teardown test_dictionaries_ddl/test.py::test_file_dictionary_restrictions 0.00s teardown test_distributed_ddl/test.py::test_create_reserved[configs] 0.00s teardown test_distributed_ddl/test.py::test_create_as_select[configs_secure] 0.00s teardown test_accept_invalid_certificate/test.py::test_strict_connection_reject =========================== short test summary info ============================ FAILED 
test_attach_partition_using_copy/test.py::test_all_replicated - Except... FAILED test_attach_partition_using_copy/test.py::test_both_mergetree - Except... FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated PASSED test_accept_invalid_certificate/test.py::test_accept PASSED test_accept_invalid_certificate/test.py::test_connection_accept PASSED test_accept_invalid_certificate/test.py::test_default PASSED test_accept_invalid_certificate/test.py::test_strict_connection_reject PASSED test_accept_invalid_certificate/test.py::test_strict_reject PASSED test_accept_invalid_certificate/test.py::test_strict_reject_with_config PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[None--default] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1 PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default] PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50 PASSED test_cluster_all_replicas/test.py::test_cluster PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1] PASSED test_distributed_ddl/test.py::test_allowed_databases[configs] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1] PASSED test_distributed_ddl/test.py::test_create_as_select[configs] PASSED test_check_table/test.py::test_check_all_tables PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes] PASSED test_distributed_ddl/test.py::test_create_reserved[configs] PASSED test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2] PASSED test_dictionaries_ddl/test.py::test_clickhouse_remote PASSED test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached PASSED test_dictionaries_ddl/test.py::test_conflicting_name PASSED test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes] PASSED test_cluster_all_replicas/test.py::test_global_in PASSED test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default PASSED test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 PASSED test_distributed_ddl/test.py::test_create_view[configs] PASSED test_check_table/test.py::test_check_normal_table_corruption[] PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes] PASSED test_distributed_ddl/test.py::test_default_database[configs] PASSED test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_local', 'test_database_backup')] PASSED test_distributed_ddl/test.py::test_detach_query[configs] PASSED test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs] PASSED test_distributed_ddl/test.py::test_implicit_macros[configs] PASSED test_distributed_ddl/test.py::test_kill_query[configs] PASSED test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_object_storage_local_plain', 'test_database_backup')] PASSED test_distributed_ddl/test.py::test_macro[configs] PASSED 
test_database_backup/test.py::test_database_backup_database[Disk('backup_disk_s3_plain', 'test_database_backup')] PASSED test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes] PASSED test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact PASSED test_disk_configuration/test.py::test_merge_tree_custom_disk_setting PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache] PASSED test_database_backup/test.py::test_database_backup_database[File('test_database_backup_file')] PASSED test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide PASSED test_disk_configuration/test.py::test_merge_tree_disk_setting PASSED test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_local', 'test_table_backup')] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed] PASSED test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting PASSED test_disk_configuration/test.py::test_merge_tree_setting_override PASSED test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_object_storage_local_plain', 'test_table_backup')] PASSED test_distributed_ddl/test.py::test_allowed_databases[configs_secure] PASSED test_database_backup/test.py::test_database_backup_table[Disk('backup_disk_s3_plain', 'test_table_backup')] PASSED test_distributed_ddl/test.py::test_create_as_select[configs_secure] PASSED test_distributed_ddl/test.py::test_create_reserved[configs_secure] PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed] PASSED test_database_backup/test.py::test_database_backup_table[File('test_table_backup_file')] PASSED test_distributed_ddl/test.py::test_create_view[configs_secure] PASSED test_distributed_ddl/test.py::test_default_database[configs_secure] PASSED test_distributed_ddl/test.py::test_detach_query[configs_secure] PASSED test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure] PASSED test_distributed_ddl/test.py::test_implicit_macros[configs_secure] PASSED test_distributed_ddl/test.py::test_kill_query[configs_secure] PASSED test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache] PASSED test_distributed_ddl/test.py::test_macro[configs_secure] PASSED test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed] PASSED test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column PASSED test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore PASSED test_block_structure_mismatch/test.py::test PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids PASSED test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed] PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container PASSED test_dictionaries_ddl/test.py::test_dictionary_with_where PASSED test_dictionaries_config_reload/test.py::test PASSED test_dictionaries_ddl/test.py::test_file_dictionary_restrictions PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree PASSED 
test_dictionaries_ddl/test.py::test_http_dictionary_restrictions PASSED test_dictionaries_ddl/test.py::test_named_collection PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 PASSED test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 PASSED test_dictionaries_ddl/test.py::test_restricted_database PASSED test_dictionaries_ddl/test.py::test_secure PASSED test_dictionaries_ddl/test.py::test_with_insert_query PASSED test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False] PASSED test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition PASSED test_cancel_freeze/test.py::test_cancel_backup PASSED test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True] PASSED test_config_xml_yaml_mix/test.py::test_extra_yaml_mix PASSED test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin] PASSED test_check_table/test.py::test_check_replicated_table_simple[-_0] PASSED test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field =================== 4 failed, 96 passed in 708.52s (0:11:48) =================== Traceback (most recent call last): File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 492, in subprocess.check_call(cmd, shell=True, bufsize=0) File "/usr/lib/python3.10/subprocess.py", line 369, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_2uhobr --privileged --dns-search='.' --memory=30709026816 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=0 --color=no --durations=0 --report-log=parallel0_0.jsonl --report-log-exclude-logs-on-passed-tests test_accept_invalid_certificate/test.py::test_accept test_accept_invalid_certificate/test.py::test_connection_accept test_accept_invalid_certificate/test.py::test_default test_accept_invalid_certificate/test.py::test_strict_connection_reject test_accept_invalid_certificate/test.py::test_strict_reject test_accept_invalid_certificate/test.py::test_strict_reject_with_config test_asynchronous_metric_log_table/test.py::test_event_time_microseconds_field test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree 
test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated test_backup_restore_azure_blob_storage/test.py::test_backup_restore test_backup_restore_azure_blob_storage/test.py::test_backup_restore_correct_block_ids test_backup_restore_azure_blob_storage/test.py::test_backup_restore_diff_container test_backup_restore_azure_blob_storage/test.py::test_backup_restore_on_merge_tree test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf1 test_backup_restore_azure_blob_storage/test.py::test_backup_restore_with_named_collection_azure_conf2 'test_backup_restore_storage_policy/test.py::test_storage_policies[None--default]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[None-None-default]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[None-policy1-policy1]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1--default]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-None-policy1]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy1-policy1]' 'test_backup_restore_storage_policy/test.py::test_storage_policies[policy1-policy2-policy2]' test_backward_compatibility/test_normalized_count_comparison.py::test_select_aggregate_alias_column test_block_structure_mismatch/test.py::test test_cancel_freeze/test.py::test_cancel_backup test_check_table/test.py::test_check_all_tables 'test_check_table/test.py::test_check_normal_table_corruption[]' 'test_check_table/test.py::test_check_replicated_table_corruption[-_0-.bin]' 'test_check_table/test.py::test_check_replicated_table_simple[-_0]' test_cluster_all_replicas/test.py::test_cluster 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_error_on_unavailable_replica[two_shards_three_nodes]' test_cluster_all_replicas/test.py::test_global_in 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[one_shard_three_nodes]' 'test_cluster_all_replicas/test.py::test_skip_unavailable_replica[two_shards_three_nodes]' test_compressed_marks_restart/test.py::test_compressed_marks_restart_compact test_compressed_marks_restart/test.py::test_compressed_marks_restart_wide test_concurrent_queries_for_all_users_restriction/test.py::test_exception_message test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_default test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_1 test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_defined_50 test_concurrent_threads_soft_limit/test.py::test_concurrent_threads_soft_limit_limit_reached test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_default test_concurrent_threads_soft_limit/test.py::test_use_concurrency_control_soft_limit_defined_50 test_config_xml_yaml_mix/test.py::test_extra_yaml_mix test_consistant_parts_after_move_partition/test.py::test_consistent_part_after_move_partition 'test_database_backup/test.py::test_database_backup_database[Disk('\"'\"'backup_disk_local'\"'\"', '\"'\"'test_database_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_database[Disk('\"'\"'backup_disk_object_storage_local_plain'\"'\"', '\"'\"'test_database_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_database[Disk('\"'\"'backup_disk_s3_plain'\"'\"', 
'\"'\"'test_database_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_database[File('\"'\"'test_database_backup_file'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[Disk('\"'\"'backup_disk_local'\"'\"', '\"'\"'test_table_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[Disk('\"'\"'backup_disk_object_storage_local_plain'\"'\"', '\"'\"'test_table_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[Disk('\"'\"'backup_disk_s3_plain'\"'\"', '\"'\"'test_table_backup'\"'\"')]' 'test_database_backup/test.py::test_database_backup_table[File('\"'\"'test_table_backup_file'\"'\"')]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_complex[complex_key_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_ranged[range_hashed]' 'test_dictionaries_all_layouts_separate_sources/test_executable_hashed.py::test_simple[hashed]' 'test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple[flat-False]' 'test_dictionaries_all_layouts_separate_sources/test_mongo_uri.py::test_simple_ssl[flat-True]' test_dictionaries_config_reload/test.py::test test_dictionaries_ddl/test.py::test_clickhouse_remote test_dictionaries_ddl/test.py::test_conflicting_name 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_cache]' 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node1_hashed]' 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_cache]' 'test_dictionaries_ddl/test.py::test_create_and_select_mysql[complex_node2_hashed]' test_dictionaries_ddl/test.py::test_dictionary_with_where test_dictionaries_ddl/test.py::test_file_dictionary_restrictions test_dictionaries_ddl/test.py::test_http_dictionary_restrictions test_dictionaries_ddl/test.py::test_named_collection test_dictionaries_ddl/test.py::test_restricted_database test_dictionaries_ddl/test.py::test_secure test_dictionaries_ddl/test.py::test_with_insert_query test_disable_insertion_and_mutation/test.py::test_disable_insertion_and_mutation test_disk_configuration/test.py::test_merge_tree_custom_disk_setting test_disk_configuration/test.py::test_merge_tree_disk_setting test_disk_configuration/test.py::test_merge_tree_nested_custom_disk_setting test_disk_configuration/test.py::test_merge_tree_setting_override 'test_distributed_ddl/test.py::test_allowed_databases[configs]' 'test_distributed_ddl/test.py::test_allowed_databases[configs_secure]' 'test_distributed_ddl/test.py::test_create_as_select[configs]' 'test_distributed_ddl/test.py::test_create_as_select[configs_secure]' 'test_distributed_ddl/test.py::test_create_reserved[configs]' 'test_distributed_ddl/test.py::test_create_reserved[configs_secure]' 'test_distributed_ddl/test.py::test_create_view[configs]' 'test_distributed_ddl/test.py::test_create_view[configs_secure]' 'test_distributed_ddl/test.py::test_default_database[configs]' 'test_distributed_ddl/test.py::test_default_database[configs_secure]' 'test_distributed_ddl/test.py::test_detach_query[configs]' 'test_distributed_ddl/test.py::test_detach_query[configs_secure]' 'test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs]' 'test_distributed_ddl/test.py::test_disabled_distributed_ddl[configs_secure]' 'test_distributed_ddl/test.py::test_implicit_macros[configs]' 'test_distributed_ddl/test.py::test_implicit_macros[configs_secure]' 'test_distributed_ddl/test.py::test_kill_query[configs]' 
'test_distributed_ddl/test.py::test_kill_query[configs_secure]' 'test_distributed_ddl/test.py::test_macro[configs]' 'test_distributed_ddl/test.py::test_macro[configs_secure]' -vvv " altinityinfra/integration-tests-runner:ad96270260ff ' returned non-zero exit status 1.
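
Note on the failures above: all four test_attach_partition_using_copy failures hit the same step. The test attaches a `source` table whose data is served from a `web` disk (endpoint https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/), and every ATTACH attempt on replica1 ended in DNS_ERROR, which suggests the test container could not resolve that endpoint host rather than a problem in the test logic itself. Below is a minimal sketch of that step run in isolation, assuming the standard integration-test helpers from tests/integration/helpers/cluster.py; the instance wiring here is inferred from the log, not copied from the real test file.

```python
# Minimal sketch, assuming the ClickHouse integration-test environment so that
# helpers.cluster is importable. Instance names mirror the log (replica1);
# any configs the real test passes to add_instance are omitted here.
from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
replica1 = cluster.add_instance("replica1", with_zookeeper=True)

# The exact statement that failed in the log: the table's data lives on a
# read-only `web` disk, so ATTACH has to fetch metadata over HTTPS.
ATTACH_SOURCE = """
ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
(
    price UInt32,
    date Date,
    postcode1 LowCardinality(String),
    postcode2 LowCardinality(String),
    type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
    is_new UInt8,
    duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
    addr1 String,
    addr2 String,
    street LowCardinality(String),
    locality LowCardinality(String),
    town LowCardinality(String),
    district LowCardinality(String),
    county LowCardinality(String)
)
ENGINE = MergeTree()
ORDER BY (postcode1, postcode2, addr1, addr2)
SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
"""

if __name__ == "__main__":
    try:
        cluster.start()
        # Fails with DNS_ERROR if the container cannot resolve
        # raw.githubusercontent.com, as in the CI run above.
        replica1.query("DROP TABLE IF EXISTS source SYNC")
        replica1.query(ATTACH_SOURCE)
        print(replica1.query("SELECT count() FROM source"))
    finally:
        cluster.shutdown()
```

With working DNS inside the container this should attach the table and print a row count; in the CI run above the same statement was issued three times (14:02:28, 14:03:22, 14:04:17) before the exception was raised.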
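
The trailing traceback is bookkeeping rather than a separate failure: ./runner launches the whole pytest session through subprocess.check_call(cmd, shell=True, bufsize=0) (runner, line 492), so once pytest exits with status 1 (4 failed, 96 passed), docker run forwards that status and check_call raises the CalledProcessError printed above. A trimmed sketch of that propagation, with a stand-in for the long docker run command:

```python
# Minimal sketch of how the non-zero pytest exit propagates out of ./runner.
import subprocess

# Stand-in for the `docker run ... altinityinfra/integration-tests-runner:ad96270260ff`
# invocation shown above; `exit 1` mimics pytest finishing with failed tests.
cmd = "exit 1"

try:
    # Same pattern as runner:492: run via the shell, unbuffered.
    subprocess.check_call(cmd, shell=True, bufsize=0)
except subprocess.CalledProcessError as e:
    # pytest exits 1 when any test fails, docker run forwards that code,
    # and check_call turns it into this exception.
    print(f"Command {e.cmd!r} returned non-zero exit status {e.returncode}.")
```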